

Search for: All records

Creators/Authors contains: "Schmidt, Samuel J."


  1. ABSTRACT

    Upcoming deep imaging surveys such as the Vera C. Rubin Observatory Legacy Survey of Space and Time will be confronted with challenges that come with increased depth. One of the leading systematic errors in deep surveys is the blending of objects due to higher surface density in the more crowded images; a considerable fraction of the galaxies which we hope to use for cosmology analyses will overlap each other on the observed sky. In order to investigate these challenges, we emulate blending in a mock catalogue consisting of galaxies at a depth equivalent to 1.3 yr of the full 10-yr Rubin Observatory survey that includes effects due to weak lensing, ground-based seeing, and the uncertainties due to extraction of catalogues from imaging data. The emulated catalogue indicates that approximately 12 per cent of the observed galaxies are ‘unrecognized’ blends that contain two or more objects but are detected as one. Using the positions and shears of half a billion distant galaxies, we compute shear–shear correlation functions after selecting tomographic samples in terms of both spectroscopic and photometric redshift bins. We examine the sensitivity of the cosmological parameter estimation to unrecognized blending employing both jackknife and analytical Gaussian covariance estimators. A decrease of ∼0.025 in the derived structure growth parameter S8 = σ8(Ωm/0.3)^0.5 is seen due to unrecognized blending in both tomographies, with a slight additional bias for the photo-z-based tomography. This bias is greater than the 2σ statistical error in measuring S8.

     
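    The first abstract quotes the structure-growth parameter S8 = σ8(Ωm/0.3)^0.5 and compares jackknife with analytical Gaussian covariance estimators for the tomographic shear–shear correlations. The sketch below is only a generic illustration of those two ingredients, not the paper's pipeline: s8() evaluates the quoted definition, and jackknife_covariance() is the standard delete-one jackknife estimator applied to placeholder per-patch measurements; the fiducial parameter values and toy data are assumptions.

    ```python
    import numpy as np

    def s8(sigma8, omega_m):
        """Structure-growth parameter S8 = sigma8 * (Omega_m / 0.3)**0.5."""
        return sigma8 * (omega_m / 0.3) ** 0.5

    def jackknife_covariance(xi_patches):
        """Delete-one jackknife covariance of a binned statistic.

        xi_patches : (n_patch, n_bin) array; row k holds the correlation
            function measured with sky patch k removed.
        """
        xi_patches = np.asarray(xi_patches)
        n_patch = xi_patches.shape[0]
        diff = xi_patches - xi_patches.mean(axis=0)
        # The (N-1)/N prefactor accounts for the strong correlation
        # between delete-one resamples.
        return (n_patch - 1) / n_patch * diff.T @ diff

    # Toy usage with synthetic per-patch measurements standing in for the
    # real tomographic xi_+/xi_- data vectors.
    rng = np.random.default_rng(0)
    xi_true = 1e-4 * np.exp(-np.linspace(0.0, 3.0, 20))
    xi_patches = xi_true + 1e-6 * rng.standard_normal((100, 20))
    cov = jackknife_covariance(xi_patches)
    print("per-bin errors:", np.sqrt(np.diag(cov))[:3])

    # Illustrative (not the paper's) fiducial values, shifted by the
    # quoted ~0.025 bias in S8.
    print("S8:", s8(0.81, 0.30), "->", s8(0.81, 0.30) - 0.025)
    ```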
  2. ABSTRACT

    Recent works have shown that weak lensing magnification must be included in upcoming large-scale structure analyses, such as for the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST), to avoid biasing the cosmological results. In this work, we investigate whether including magnification has a positive impact on the precision of the cosmological constraints, as well as being necessary to avoid bias. We forecast this using an LSST mock catalogue and a halo model to calculate the galaxy power spectra. We find that including magnification has little effect on the precision of the cosmological parameter constraints for an LSST galaxy clustering analysis, where the halo model parameters are additionally constrained by the galaxy luminosity function. In particular, we find that for the LSST gold sample (i < 25.3) including weak lensing magnification only improves the galaxy clustering constraint on Ωm by a factor of 1.03, and when using a very deep LSST mock sample (i < 26.5) by a factor of 1.3. Since magnification predominantly contributes to the clustering measurement and provides similar information to that of cosmic shear, this improvement would be reduced for a combined galaxy clustering and shear analysis. We also confirm that not modelling weak lensing magnification will catastrophically bias the cosmological results from LSST. Magnification must therefore be included in LSST large-scale structure analyses even though it does not significantly enhance the precision of the cosmological constraints.

     
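    The second abstract hinges on the size of the magnification term in the observed galaxy counts, which in the standard weak lensing convention adds a (5s − 2)κ contribution to the overdensity, with s = d log10 N(<m)/dm the logarithmic slope of the cumulative counts at the survey magnitude limit. The snippet below is a toy estimate of that coefficient for the two quoted depth cuts, using an assumed power-law count model rather than the paper's mock luminosity function or halo model forecast.

    ```python
    import numpy as np

    def count_slope(mags, dn_dm, m_lim):
        """s = d log10 N(<m) / dm of the cumulative counts, at magnitude m_lim."""
        cum = np.cumsum(dn_dm)                    # proportional to N(<m) on the grid
        slope = np.gradient(np.log10(cum), mags)  # finite-difference log slope
        return np.interp(m_lim, mags, slope)

    # Assumed toy differential counts dn/dm ∝ 10**(0.3 m); a real forecast
    # would use the mock catalogue's luminosity function instead.
    mags = np.linspace(20.0, 27.0, 141)
    dn_dm = 10 ** (0.3 * mags)

    for m_lim, label in [(25.3, "gold sample, i < 25.3"),
                         (26.5, "deep sample, i < 26.5")]:
        s = count_slope(mags, dn_dm, m_lim)
        print(f"{label}: s = {s:.2f}, magnification coefficient 5s - 2 = {5 * s - 2:.2f}")
    ```

    In a full forecast this slope would be evaluated per tomographic sample from the mock luminosity function, which is the same ingredient the abstract uses to further constrain the halo model parameters.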